DTE AICCOMAS 2025

Keynote

Shallow Recurrent Decoders for Reduced Order Modeling

  • Kutz, Nathan (University of Washington)
  • Gao, Mars (University of Washington)
  • Williams, Jan (University of Washington)


accelerating engineering design and characterization. Specifically, the integration of scientific computing with machine learning is now an integral part of every field of application, with fast numerical solvers and methods playing a critical enabling role in the modeling of high-dimensional, complex dynamical systems. We propose a shallow recurrent decoder (SHRED) [1] neural network architecture for model reduction that combines (i) a recurrent neural network, which learns a latent representation of the temporal dynamics of the sensors, and (ii) a shallow decoder, which learns a mapping from this latent representation to the high-dimensional state space. SHRED enables accurate reconstructions with far fewer sensors, outperforms existing techniques when more measurements are available, and is more robust to random sensor placement. In the example cases explored, complex spatio-temporal dynamics are characterized by exceedingly few sensors, which can be placed randomly with minimal loss of performance. The architecture can be trained on compressive representations of the data, enabling laptop-level computing for training on large spatio-temporal data sets.

SHRED can further be integrated with the sparse identification of nonlinear dynamics (SINDy) algorithm, which exploits the latent space of the recurrent network for sparse sensor modeling and the construction of interpretable models. In this way, SINDy-SHRED enables a robust and sample-efficient joint discovery of the governing equation and the coordinate system. With the correct governing equation, SINDy-SHRED can perform accurate long-term prediction in the learned latent space, which in turn enables long-term forecasting with the reduced order model.
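To make the two-stage structure concrete, the following is a minimal NumPy sketch of a SHRED-style forward pass: a recurrent unit compresses a short sensor time history into a latent vector, and a shallow (one-hidden-layer) decoder maps that latent vector to the full state. All dimensions, the vanilla RNN cell, and the random (untrained) weights are illustrative assumptions, not the architecture or hyperparameters used in [1].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
n_sensors = 3    # number of point sensors
n_state = 100    # dimension of the high-dimensional state to reconstruct
n_hidden = 16    # latent dimension of the recurrent unit
lag = 10         # length of each sensor time history

# --- (i) recurrent encoder: a vanilla RNN over the sensor trajectory ---
Wx = rng.standard_normal((n_hidden, n_sensors)) * 0.1
Wh = rng.standard_normal((n_hidden, n_hidden)) * 0.1

def encode(sensor_seq):
    """Map a (lag, n_sensors) sensor history to a latent vector h."""
    h = np.zeros(n_hidden)
    for s in sensor_seq:
        h = np.tanh(Wx @ s + Wh @ h)
    return h

# --- (ii) shallow decoder: one ReLU hidden layer, latent -> full state ---
W1 = rng.standard_normal((32, n_hidden)) * 0.1
W2 = rng.standard_normal((n_state, 32)) * 0.1

def decode(h):
    return W2 @ np.maximum(W1 @ h, 0.0)

# Forward pass on random sensor data (weights are untrained here).
seq = rng.standard_normal((lag, n_sensors))
state_hat = decode(encode(seq))
print(state_hat.shape)  # (100,)
```

In practice the encoder would be an LSTM and both stages would be trained jointly on (sensor history, full state) pairs; training on compressed representations of the state, as the abstract notes, only shrinks `n_state`, leaving the structure above unchanged.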
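The SINDy step that SINDy-SHRED applies in the latent space can also be sketched briefly. Given latent trajectories, SINDy fits time derivatives with a sparse combination of candidate library terms via sequentially thresholded least squares (STLSQ). The toy latent signal, library, and threshold below are assumptions for illustration; a real latent trajectory from a trained SHRED would take the place of the synthetic decay.

```python
import numpy as np

# Toy latent trajectory obeying z' = -0.5 z, sampled on a uniform grid.
dt = 0.01
t = np.arange(0.0, 5.0, dt)
z = np.exp(-0.5 * t)          # latent coordinate z(t)
dz = np.gradient(z, dt)       # numerical time derivative

# Candidate library Theta(z) = [1, z, z^2, z^3].
Theta = np.column_stack([np.ones_like(z), z, z**2, z**3])

def stlsq(Theta, dz, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: the sparse-regression
    step of SINDy. Small coefficients are zeroed and the rest refit."""
    xi = np.linalg.lstsq(Theta, dz, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(Theta[:, ~small], dz, rcond=None)[0]
    return xi

xi = stlsq(Theta, dz)
print(xi)  # should recover approximately [0, -0.5, 0, 0], i.e. z' = -0.5 z
```

The recovered sparse coefficient vector is the interpretable governing equation for the latent coordinate; integrating it forward is what permits the long-term latent-space forecasting described above.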